Conversation

@stephenplusplus
Contributor

  • Allow a choice between the simple and resumable upload techniques in upload and createWriteStream
  • upload: stat the incoming file for size; default to simple for files under 5 MB, resumable otherwise
  • createWriteStream: default to resumable uploads
  • Integrate a { resumableThreshold: n } option on storage instantiation (defaults to 5 MB)
  • Test upload stream integrity
  • Finalize error messaging & objects

Fixes #298

createWriteStream uses the Resumable Upload API: http://goo.gl/jb0e9D.

The process involves these steps (a rough sketch in code follows the list):

  1. POST the file's metadata. We get a resumable upload URI back, then cache it with ConfigStore.
  2. PUT data to that URI with a Content-Range header noting the position the data begins at. We also cache, at most, the first 16 bytes of the data being uploaded.
  3. Delete the ConfigStore cache after the upload completes.
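
For reference, here is a minimal sketch of those three steps, assuming the request and configstore modules and omitting auth for brevity (startResumableUpload and putChunk are hypothetical helper names, not this PR's actual internals):

var request = require("request"); // generic HTTP client, assumed available
var Configstore = require("configstore");
var config = new Configstore("gcloud-node");

// Step 1: POST the file's metadata; the response's Location header is the
// resumable upload URI, which we cache for later resumes.
function startResumableUpload(bucket, fileName, metadata, callback) {
  request({
    method: "POST",
    uri: "https://www.googleapis.com/upload/storage/v1/b/" + bucket +
      "/o?uploadType=resumable&name=" + encodeURIComponent(fileName),
    json: metadata // auth headers omitted for brevity
  }, function (err, res) {
    if (err) return callback(err);
    config.set(fileName, { uri: res.headers.location });
    callback(null, res.headers.location);
  });
}

// Step 2: PUT data to the URI, with a Content-Range header noting the
// position this chunk begins at.
function putChunk(uri, chunk, offset, totalSize, callback) {
  request({
    method: "PUT",
    uri: uri,
    headers: {
      "Content-Range": "bytes " + offset + "-" +
        (offset + chunk.length - 1) + "/" + totalSize
    },
    body: chunk
  }, callback);
}

// Step 3: once the final chunk is acknowledged, drop the cache entry:
// config.delete(fileName); // `del` in older configstore versions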

If the initial upload operation is interrupted, the next time the user uploads the file, these steps occur:

  1. Detect the presence of a cached URI in ConfigStore.
  2. Make an empty PUT request to that URI to get the last byte written to the remote file.
  3. PUT data to the URI starting from the first byte after the last byte returned from the call above.

If the user tries to upload entirely different data to the remote file (see the second sketch below):

  1. -- same as above --
  2. -- same as above --
  3. -- same as above --
  4. Compare the first chunk of the new data with the chunk in cache. If it's different, start a new resumable upload (Step 1 of the first example).
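
Continuing the sketch above, the resume and bail-out logic might look like this (again with hypothetical helper names):

// Resume steps 1-3: an empty PUT with "Content-Range: bytes */<size>"
// makes the API report, via the Range header, the last byte it received.
function getLastByteReceived(uri, totalSize, callback) {
  request({
    method: "PUT",
    uri: uri,
    headers: { "Content-Range": "bytes */" + totalSize }
  }, function (err, res) {
    if (err) return callback(err);
    var range = res.headers.range; // e.g. "bytes=0-42" -> resume at byte 43
    callback(null, range ? parseInt(range.split("-")[1], 10) : -1);
  });
}

// Step 4: compare the first bytes of the new data with the (up to) 16
// bytes cached when the upload began; if they differ, bail and start over.
function isSameData(fileName, firstChunk) {
  var cached = config.get(fileName).firstChunk; // stored as a byte array
  return Buffer.compare(
    Buffer.from(cached), firstChunk.slice(0, cached.length)
  ) === 0;
}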

@stephenplusplus force-pushed the spp--storage-resumable-uploads branch 3 times, most recently from e5a72fe to ba2e8cf on November 11, 2014

@stephenplusplus force-pushed the spp--storage-resumable-uploads branch 2 times, most recently from a665191 to f76c406 on November 12, 2014

@ryanseys
Contributor

Overall this looks good. No big issues. RETRY_LIMIT might want to be increased if we put in exponential backoff as suggested. 5 seems like a more sane default (as suggested here).
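
A sketch of what the suggested backoff could look like (retryWithBackoff is a hypothetical helper, not code from this PR):

var RETRY_LIMIT = 5;

function retryWithBackoff(operation, attempt, callback) {
  operation(function (err, result) {
    if (!err) return callback(null, result);
    if (attempt >= RETRY_LIMIT) return callback(err);
    // wait 2^attempt seconds, plus up to 1s of jitter, then try again
    var delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
    setTimeout(function () {
      retryWithBackoff(operation, attempt + 1, callback);
    }, delay);
  });
}

// usage: retryWithBackoff(makeRequest, 0, done);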

@stephenplusplus
Contributor Author

Recent best practices have emerged, so I figured we should put these in place before merging. I've added a task list in the initial post with the intended revisions so far; more will likely be coming.

One of the revisions is allowing a user to specify a preference of a simple or resumable upload. I'm seeking opinions on converting the upload method to a different signature than what we have currently.

Current:

myBucket.upload("./photo.jpg", myFile, { my: "metadata" }, function (err, file) {})

Suggested:

myBucket.upload("./photo.jpg", {
  destination: myFile,
  metadata: {
    my: "metadata"
  },
  resumable: true // or false
}, function (err, file) {})

file.createWriteStream() will also need this functionality added.

Current:

myFile.createWriteStream({ my: "metadata" })

Suggested:

myFile.createWriteStream({
  resumable: false, // default: true
  metadata: {
    my: "metadata"
  }
})

Any better ideas?

@silvolu
Contributor

silvolu commented Nov 20, 2014

Looks good, but I'd like the user to be able to change the resumableThreshold (which defaults to 5 MB). Could we expose a configuration object for storage, or add setters for values like this? In the future we might need the same for the chunk size, and we could use it to let the user change the default for createWriteStream at a global level.

@stephenplusplus
Contributor Author

Config on the storage level makes sense to me.

var gcloud = require("gcloud")({ /* conn info */ })
gcloud.storage({ resumableThreshold: n })

&

var gcloud = require("gcloud")
var storage = gcloud.storage({ /* conn info, */ resumableThreshold: n })

Two questions:

  1. What format do we accept for n? (bytes, KB, MB?)
  2. Is it ok to rely on our docs to explain that resumableThreshold won't affect createWriteStream uploads? I can see that being a point of confusion.

@ryanseys
Contributor

What format do we accept for n? (bytes, KB, MB?)

Bytes. The header is in bytes, so this seems like a simple choice.

Is it ok to rely on our docs to explain that resumableThreshold won't affect createWriteStream uploads?

Wait, why not?

@stephenplusplus
Contributor Author

In a stream, we can't stat a file for its size. The data comes to us in small chunks, meaning we don't know whether it's over the threshold until after we've already formed the request.

I suppose if we wanted to, we could buffer up to the threshold into memory before beginning the request (which is when we have to choose resumable vs. simple), but that seems like a dangerous approach (sketched below).

& +1 on bytes.
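
For illustration, the buffering approach being dismissed might look like this hypothetical chooseUploadType helper; the danger is that every in-flight upload holds up to threshold bytes in memory:

function chooseUploadType(readable, threshold, callback) {
  var buffered = [];
  var size = 0;
  var decided = false;
  readable.on("data", function (chunk) {
    if (decided) return;
    buffered.push(chunk);
    size += chunk.length;
    if (size >= threshold) {
      decided = true;
      readable.pause();
      // over the threshold: go resumable, replaying the buffer first
      callback("resumable", Buffer.concat(buffered), readable);
    }
  });
  readable.on("end", function () {
    if (!decided) {
      // the whole stream fit under the threshold: simple upload
      callback("simple", Buffer.concat(buffered), null);
    }
  });
}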

@ryanseys
Contributor

Fair enough. Plus you don't really know that the readable stream is a file at all. That being said, should resumable even work with streams unless they explicitly give us the filename to use?

@stephenplusplus
Contributor Author

That's a great question, but I think it's impossible to answer. Still, I anticipate resumable will be a desirable default, and technically speaking, we have a solution for when we resume an upload but are sent different data than the original: we bail and start a new upload.

And in any case, the user knows best what they are doing, so we [will] allow them to be explicit about what type of upload to use at the time of the upload.

@ryanseys
Contributor

Can we get access to the readable stream that is piping their data to our writable stream? In theory, if we can, we could try to yank the fd (file descriptor) from it and then sneakily stat the file to find out its name and size. This is a total shot in the dark, though.
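
For illustration, the kind of fd sniffing being floated (sniffSource is a hypothetical helper; it only works when the source happens to be an fs.ReadStream whose descriptor is already open):

var fs = require("fs");

function sniffSource(readable, callback) {
  if (typeof readable.fd === "number") {
    // only fs.ReadStream exposes .fd and .path; most streams won't
    fs.fstat(readable.fd, function (err, stats) {
      callback(err, stats && stats.size, readable.path);
    });
  } else {
    callback(null, null, null); // not a file stream; size unknown
  }
}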

@stephenplusplus
Contributor Author

With a stream, we should only be aware of the data coming in, not how or where it originates. It would also be a bit magical if we tried to implement something like that. And usually, whenever there's magic, the solution is to add an option or a variation of the method that gives the user explicit control of the outcome. We will have both of those things ({ resumable: false } and bucket.upload).

@ryanseys
Contributor

Yeah, that would be too much magic, agreed. Getting back to the original question, I think it's safe to say that if the developer is uploading via a stream, they know resumableThreshold won't apply. I would only expect it to work if we're explicitly given the filename, i.e. as in .upload(), so if you can do better than that, that's exceeding expectations in my mind.

@stephenplusplus force-pushed the spp--storage-resumable-uploads branch from 7bd17d9 to 7e1dde8 on November 20, 2014